70 research outputs found

    Comment: The Next Frontier: Prosody Research Gets Interpersonal

    Neurocognitive models (e.g., Schirmer & Kotz, 2006) have helped to characterize how listeners incrementally derive meaning from vocal expressions of emotion in spoken language, what neural mechanisms are involved at different processing stages, and their relative time course. But how can these insights be applied to communicative situations in which prosody serves a predominantly interpersonal function? This comment examines recent data highlighting the dynamic interplay of prosody and language when vocal attributes serve the sociopragmatic goals of the speaker or reveal interpersonal information that listeners use to construct a mental representation of what is being communicated. Our comment serves as a beacon to researchers interested in how the neurocognitive system “makes sense” of socioemotive aspects of prosody.

    Recognizing Emotions in a Foreign Language

    Expressions of basic emotions (joy, sadness, anger, fear, disgust) can be recognized pan-culturally from the face, and it is assumed that these emotions can be recognized from a speaker's voice regardless of an individual's culture or linguistic ability. Here, we compared how monolingual speakers of Argentine Spanish recognize basic emotions from pseudo-utterances ("nonsense speech") produced in their native language and in three foreign languages (English, German, Arabic). Results indicated that vocal expressions of basic emotions could be decoded in each language condition at accuracy levels exceeding chance, although Spanish listeners performed significantly better overall in their native language ("in-group advantage"). Our findings argue that the ability to understand vocally-expressed emotions in speech is partly independent of linguistic ability and involves universal principles, although this ability is also shaped by linguistic and cultural variables.
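
    As a concrete illustration of testing whether recognition accuracy exceeds chance in a forced-choice emotion task of this kind, here is a minimal Python sketch; the number of response options, trial counts, and accuracy values are invented for the example and are not the study's data.

```python
from scipy.stats import binomtest

# Hypothetical forced-choice setup: 5 basic-emotion response options,
# so chance-level accuracy is 1/5. Hits and trial counts are invented.
chance = 1 / 5
results = {"Spanish": (52, 60), "English": (38, 60),
           "German": (35, 60), "Arabic": (33, 60)}

for language, (hits, trials) in results.items():
    # One-sided binomial test: is accuracy greater than chance?
    test = binomtest(hits, trials, p=chance, alternative="greater")
    print(f"{language:8s} accuracy = {hits / trials:.2f}, "
          f"p (above chance) = {test.pvalue:.2g}")
```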

    Dynamic Facial Expressions Prime the Processing of Emotional Prosody

    Evidence suggests that emotion is represented supramodally in the human brain. Emotional facial expressions, which often precede vocally expressed emotion in real life, can modulate event-related potentials (N100 and P200) during emotional prosody processing. To investigate these cross-modal emotional interactions, two lines of research have been put forward: cross-modal integration and cross-modal priming. In cross-modal integration studies, visual and auditory channels are temporally aligned, while in priming studies they are presented consecutively. Here we used cross-modal emotional priming to study the interaction of dynamic visual and auditory emotional information. Specifically, we presented dynamic facial expressions (angry, happy, neutral) as primes and emotionally-intoned pseudo-speech sentences (angry, happy) as targets. We were interested in how prime-target congruency would affect early auditory event-related potentials, i.e., the N100 and P200, in order to shed more light on how dynamic facial information is used in cross-modal emotional prediction. Results showed enhanced N100 amplitudes for incongruently primed compared to congruently and neutrally primed emotional prosody, while the latter two conditions did not significantly differ. However, N100 peak latency was significantly delayed in the neutral condition compared to the other two conditions. Source reconstruction revealed that the right parahippocampal gyrus was more strongly activated in incongruent than in congruent trials in the N100 time window. No significant ERP effects were observed in the P200 range. Our results indicate that dynamic facial expressions influence vocal emotion processing at an early point in time, and that an emotional mismatch between a facial expression and the ensuing vocal emotional signal induces additional processing costs in the brain, potentially because the cross-modal emotional prediction mechanism is violated in the case of emotional prime-target incongruency.
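
    To make the measurement step concrete, the following minimal Python sketch (using NumPy and simulated data, not the authors' pipeline) shows how a mean N100 amplitude and peak latency per priming condition could be extracted from pre-epoched EEG; the sampling rate, analysis window, and channel selection are assumptions.

```python
import numpy as np

# Assumed epoch layout: each condition maps to an array of shape
# (n_trials, n_channels, n_samples), sampled at `sfreq` Hz with the
# epoch starting at `tmin` seconds relative to prosody onset.
sfreq, tmin = 500.0, -0.2
n100_window = (0.08, 0.13)          # assumed N100 analysis window (s)

def n100_measures(epochs, channels):
    """Mean amplitude and peak latency of the N100 over selected channels."""
    i0 = int((n100_window[0] - tmin) * sfreq)
    i1 = int((n100_window[1] - tmin) * sfreq)
    erp = epochs[:, channels, :].mean(axis=(0, 1))   # average trials, channels
    window = erp[i0:i1]
    peak = int(np.argmin(window))                    # N100 is a negative peak
    peak_latency_s = tmin + (i0 + peak) / sfreq
    return window.mean(), peak_latency_s

# Simulated data for the three priming conditions (illustration only).
rng = np.random.default_rng(0)
conditions = {name: rng.normal(size=(40, 64, 350))
              for name in ("congruent", "incongruent", "neutral")}
frontocentral = [10, 11, 12]                         # assumed channel indices

for name, data in conditions.items():
    amp, lat = n100_measures(data, frontocentral)
    print(f"{name:12s} mean N100 = {amp:+.3f} (a.u.), peak at {lat * 1000:.0f} ms")
```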

    Impaired neural processing of dynamic faces in left-onset Parkinson's disease

    Parkinson's disease (PD) affects patients beyond the motor domain. According to previous evidence, one mechanism that may be impaired in the disease is face processing. However, few studies have investigated this process at the neural level in PD. Moreover, research using dynamic facial displays rather than static pictures is scarce but highly warranted, given the higher ecological validity of dynamic stimuli. In the present study we aimed to investigate how PD patients process emotional and non-emotional dynamic face stimuli at the neural level using event-related potentials. Since the literature has revealed a predominantly right-lateralized network for dynamic face processing, we divided the group into patients with left (LPD) and right (RPD) motor symptom onset (right versus left cerebral hemisphere predominantly affected, respectively). Participants watched short video clips of happy, angry, and neutral expressions and engaged in a shallow gender decision task in order to avoid confounds of task difficulty in the data. In line with our expectations, the LPD group showed significant face processing deficits compared to controls. While there were no group differences in early, sensory-driven processing (fronto-central N1 and posterior P1), the vertex positive potential, which is considered the fronto-central counterpart of the face-specific posterior N170 component, had a reduced amplitude and delayed latency in the LPD group. This may indicate disturbances of structural face processing in LPD. Furthermore, the effect was independent of the emotional content of the videos. In contrast, static facial identity recognition performance in LPD was not significantly different from controls, and comprehensive testing of cognitive functions did not reveal any deficits in this group. We therefore conclude that PD, and more specifically the predominantly right-hemispheric involvement in left-onset PD, is associated with impaired processing of dynamic facial expressions, which could be one of the mechanisms behind the often-reported problems of PD patients in their social lives.

    Seeing Emotion with Your Ears: Emotional Prosody Implicitly Guides Visual Attention to Faces

    Interpersonal communication involves the processing of multimodal emotional cues, particularly facial expressions (visual modality) and emotional speech prosody (auditory modality), which can interact during information processing. Here, we investigated whether the implicit processing of emotional prosody systematically influences gaze behavior to facial expressions of emotion. We analyzed the eye movements of 31 participants as they scanned a visual array of four emotional faces portraying fear, anger, happiness, and neutrality, while listening to an emotionally-inflected pseudo-utterance ("Someone migged the pazing") spoken in a congruent or incongruent tone. Participants heard the emotional utterance during the first 1250 milliseconds of a five-second visual array and then performed an immediate recall decision about the face they had just seen. The frequency and duration of first saccades and of total looks in three temporal windows (0–1250 ms, 1250–2500 ms, 2500–5000 ms) were analyzed according to the emotional content of faces and voices. Results showed that participants looked longer and more frequently at faces that matched the prosody in all three time windows (emotion congruency effect), although this effect was often emotion-specific (with the greatest effects for fear). Effects of prosody on visual attention to faces persisted over time and could be detected long after the auditory information was no longer present. These data imply that emotional prosody is processed automatically during communication and that these cues play a critical role in how humans respond to related visual cues in the environment, such as facial expressions.
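
    For readers interested in how such gaze measures are typically aggregated, the short Python/pandas sketch below bins fixations into the three analysis windows and sums looking time by prosody-face congruency; the data frame, column names, and values are invented for illustration and are not the study's data.

```python
import pandas as pd

# Hypothetical fixation log: one row per fixation on a face in the array.
fixations = pd.DataFrame({
    "participant":     [1, 1, 1, 2, 2],
    "onset_ms":        [150, 900, 2600, 300, 1400],
    "duration_ms":     [220, 450, 300, 510, 260],
    "face_emotion":    ["fear", "anger", "fear", "happy", "happy"],
    "prosody_emotion": ["fear", "fear", "fear", "happy", "anger"],
})

# Assign each fixation to one of the three analysis windows described above.
bins = [0, 1250, 2500, 5000]
labels = ["0-1250 ms", "1250-2500 ms", "2500-5000 ms"]
fixations["window"] = pd.cut(fixations["onset_ms"], bins=bins, labels=labels)

# A fixation is "congruent" when the fixated face matches the heard prosody.
fixations["congruent"] = fixations["face_emotion"] == fixations["prosody_emotion"]

# Total looking time and number of looks per window and congruency status.
summary = (fixations
           .groupby(["window", "congruent"], observed=True)["duration_ms"]
           .agg(total_ms="sum", n_looks="count")
           .reset_index())
print(summary)
```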

    Phenome-wide association analysis of LDL-cholesterol lowering genetic variants in PCSK9

    Background: We characterised the phenotypic consequences of genetic variation at the PCSK9 locus and compared findings with recent trials of pharmacological inhibitors of PCSK9. Methods: Published and individual-participant-level data (300,000+ participants) were combined to construct a weighted PCSK9 gene-centric score (GS). Seventeen randomized placebo-controlled PCSK9 inhibitor trials were included, providing data on 79,578 participants. Results were scaled to a one mmol/L lower LDL-C concentration. Results: The associations of the PCSK9 GS (comprising 4 SNPs) with plasma lipid and apolipoprotein levels were consistent in direction with treatment effects. The GS odds ratio (OR) for myocardial infarction (MI) was 0.53 (95% CI 0.42; 0.68), compared to a PCSK9 inhibitor effect of 0.90 (95% CI 0.86; 0.93). For ischemic stroke, ORs were 0.84 (95% CI 0.57; 1.22) for the GS, compared to 0.85 (95% CI 0.78; 0.93) in the drug trials. ORs for type 2 diabetes mellitus (T2DM) were 1.29 (95% CI 1.11; 1.50) for the GS, compared to 1.00 (95% CI 0.96; 1.04) for incident T2DM in PCSK9 inhibitor trials. No genetic associations were observed for cancer, heart failure, atrial fibrillation, chronic obstructive pulmonary disease, or Alzheimer's disease, outcomes for which large-scale trial data were unavailable. Conclusions: Genetic variation at the PCSK9 locus recapitulates the effects of therapeutic inhibition of PCSK9 on major blood lipid fractions and MI. Apart from an indication of increased T2DM risk, no other potential safety concerns were identified, although precision was moderate.
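
    As a rough illustration of how a weighted gene-centric score and the scaling of an association to a 1 mmol/L lower LDL-C might work in practice, here is a minimal Python sketch; the SNP labels, weights, and regression estimates are placeholders, not the values used in the study.

```python
import numpy as np

# Per-allele weights for the four score SNPs (placeholders only; the actual
# rsIDs and weights used in the study are not reproduced here).
weights = {"snp1": 0.05, "snp2": 0.10, "snp3": 0.03, "snp4": 0.32}

def gene_centric_score(allele_counts):
    """Weighted allele score: sum over SNPs of (allele count x weight)."""
    return sum(weights[snp] * n for snp, n in allele_counts.items())

# One participant carrying 0/1/2 copies of each LDL-C-lowering allele.
gs = gene_centric_score({"snp1": 1, "snp2": 0, "snp3": 2, "snp4": 1})

# Wald-type scaling of a disease association to a 1 mmol/L lower LDL-C:
# divide the log-odds ratio per unit of the score by the LDL-C difference
# (in mmol/L) per unit of the score, then exponentiate.
log_or_per_score_unit = -0.25       # illustrative regression estimate
ldl_mmol_per_score_unit = 0.40      # illustrative LDL-C difference per unit
or_per_mmol_lower_ldl = np.exp(log_or_per_score_unit / ldl_mmol_per_score_unit)

print(f"score = {gs:.2f}, "
      f"OR per 1 mmol/L lower LDL-C = {or_per_mmol_lower_ldl:.2f}")
```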

    Feeling backwards? How temporal order in speech affects the time course of vocal emotion recognition

    Recent studies suggest that the time course for recognizing vocal expressions of basic emotion in speech varies significantly by emotion type, implying that listeners uncover acoustic evidence about emotions at different rates in speech (e.g., fear is recognized most quickly whereas happiness and disgust are recognized relatively slowly; Pell and Kotz, 2011). To investigate whether vocal emotion recognition is largely dictated by the amount of time listeners are exposed to speech or by the position of critical emotional cues in the utterance, 40 English-speaking participants judged the meaning of emotionally-inflected pseudo-utterances presented in a gating paradigm, where utterances were gated as a function of their syllable structure in segments of increasing duration from the end of the utterance (i.e., gated ‘backwards’). Accuracy for detecting six target emotions in each gate condition and the mean identification point for each emotion in milliseconds were analyzed and compared to results from Pell and Kotz (2011). We again found significant emotion-specific differences in the time needed to accurately recognize emotions from speech prosody, and new evidence that utterance-final syllables tended to facilitate listeners’ accuracy in many conditions when compared to utterance-initial syllables. The time needed to recognize fear, anger, sadness, and neutral expressions from speech cues was not influenced by how utterances were gated, although happiness and disgust were recognized significantly faster when listeners heard the end of utterances first. Our data provide new clues about the relative time course for recognizing vocally-expressed emotions within the 400–1200 ms time window, while highlighting that emotion recognition from prosody can be shaped by the temporal properties of speech.
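
    To make the ‘backwards’ gating manipulation concrete, here is a minimal Python sketch of how gates of increasing duration could be built from the end of an utterance given syllable onset times, together with a simple identification-point criterion; the sampling rate, boundary times, and criterion are assumptions, not the study's materials.

```python
import numpy as np

def backward_gates(signal, sfreq, syllable_onsets_s):
    """Build gated stimuli of increasing duration from the END of an utterance.

    signal: 1-D audio samples; syllable_onsets_s: onset time (s) of each
    syllable in utterance order. Gate 1 contains only the final syllable,
    gate 2 the final two syllables, and so on (assumed interpretation of
    'backwards' gating).
    """
    onsets = (np.asarray(syllable_onsets_s) * sfreq).astype(int)
    # Iterate from the last syllable onset back to the first.
    return [signal[start:] for start in onsets[::-1]]

def identification_point(gate_durations_ms, correct_by_gate):
    """First gate duration at which the emotion is correctly identified and
    stays correct through all longer gates (a simple, assumed criterion)."""
    for i, duration in enumerate(gate_durations_ms):
        if all(correct_by_gate[i:]):
            return duration
    return None

# Toy example: a 1.5 s utterance at 16 kHz with five syllables.
sfreq = 16000
signal = np.zeros(int(1.5 * sfreq))
gates = backward_gates(signal, sfreq, [0.0, 0.3, 0.6, 0.9, 1.2])
durations_ms = [len(g) / sfreq * 1000 for g in gates]   # 300, 600, ..., 1500
print(durations_ms,
      identification_point(durations_ms, [False, True, True, True, True]))
```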

    The perception and comprehension of intonation by brain-damaged adults in linguistic and affective contexts

    Tasks testing linguistic and affective prosody were administered to nine right-hemisphere-damaged (RHD), ten left-hemisphere-damaged (LHD), and ten age-matched control (NC) subjects. Two tasks measured subjects' abilities to make same/different judgments about prosodic patterns that had been filtered of their linguistic content, while six tasks required subjects to identify typical linguistic or affective meanings of intonation contours. The six identification tasks varied in the amount of linguistic structure available to subjects during auditory perception; stimuli were either filtered of their phonetic content, presented as nonsense utterances, or contained appropriate semantic information that biased the prosodic target. Unilateral damage to either cerebral hemisphere did not impair subjects' ability to discriminate prosodic patterns or to recognize the affective mood conveyed through prosody. Contrary to expectation, RHD patients performed comparably in both propositional and affective contexts, and thus did not show evidence of a specific disturbance of emotional prosody. LHD patients, however, were differentially impaired on linguistic tasks rather than emotional tasks when compared to the NC group, even when semantic information biased the target response. The results are discussed with respect to theories of lateralized processing of linguistic and affective prosody.
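
    Content-filtered speech of the kind described above is commonly produced by low-pass filtering, which preserves the intonation contour while rendering the words unintelligible. The Python/SciPy sketch below shows one way this could be done; the cutoff frequency and filter order are assumptions, not the settings used in this study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def remove_phonetic_content(signal, sfreq, cutoff_hz=400, order=4):
    """Low-pass filter speech so that intonation (the F0 contour) remains
    audible but segmental/phonetic detail becomes unintelligible.
    The 400 Hz cutoff and 4th-order Butterworth design are assumptions."""
    sos = butter(order, cutoff_hz, btype="low", fs=sfreq, output="sos")
    return sosfiltfilt(sos, signal)   # zero-phase filtering

# Toy example: filter 2 s of noise standing in for a speech recording.
sfreq = 16000
speech = np.random.default_rng(1).normal(size=2 * sfreq)
prosody_only = remove_phonetic_content(speech, sfreq)
```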

    Dynamic emotion processing in Parkinson's disease as a function of channel availability

    Parkinson's disease (PD) is linked to impairments in recognizing emotional expressions, although the extent and nature of these communication deficits are uncertain. Here, we compared how adults with and without PD recognize dynamic expressions of emotion in three channels involving lexical-semantic, prosodic, and/or facial cues (each channel was investigated individually and in combination). Results indicated that while emotion recognition increased with channel availability in the PD group, patients performed significantly worse than healthy participants in all conditions. Difficulties processing dynamic emotional stimuli in PD could be linked to striatal dysfunction, which reduces efficient binding of sequential information in the disease.